Graph-centric artificial intelligence (graph AI) has achieved remarkable success in modeling interacting systems that are prevalent in nature, from dynamical systems in biology to particle physics. The increasing heterogeneity of data calls for graph neural architectures that can combine multiple inductive biases. However, combining data from various sources is challenging because the appropriate inductive bias may vary across data modalities. Multimodal learning methods address this challenge by fusing multiple data modalities while leveraging cross-modal dependencies. Here, we survey 140 studies in graph-centric AI and find that diverse data types are increasingly being brought together using graphs and fed into sophisticated multimodal models. These models fall into image-grounded, language-grounded, and knowledge-grounded multimodal learning. Based on this categorization, we propose an algorithmic blueprint for multimodal graph learning. The blueprint serves as a way to study state-of-the-art architectures that treat multimodal data through the appropriate choice of four distinct components. This work can pave the way toward standardizing the design of sophisticated multimodal architectures for highly complex real-world problems.
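To make the four-component blueprint concrete, here is a minimal, hypothetical sketch in PyTorch: modality-specific encoders, a shared graph over nodes from both modalities, one round of message passing, and a readout head. All module names, dimensions, and the dense-adjacency representation are illustrative assumptions, not an architecture taken from the surveyed papers.

    import torch
    import torch.nn as nn

    class MultimodalGraphModel(nn.Module):
        def __init__(self, img_dim, txt_dim, hidden=128, n_classes=2):
            super().__init__()
            # (1) One encoder per modality projects raw features into a shared space.
            self.img_enc = nn.Linear(img_dim, hidden)
            self.txt_enc = nn.Linear(txt_dim, hidden)
            # (3) A single message-passing layer: aggregate neighbors, then update.
            self.msg = nn.Linear(hidden, hidden)
            self.upd = nn.Linear(2 * hidden, hidden)
            # (4) Readout head for a graph-level prediction.
            self.head = nn.Linear(hidden, n_classes)

        def forward(self, img_x, txt_x, adj):
            # (2) Nodes from both modalities share one graph; adj is a dense
            # (n_img + n_txt) x (n_img + n_txt) adjacency matrix built upstream.
            h = torch.cat([self.img_enc(img_x), self.txt_enc(txt_x)], dim=0)
            neigh = adj @ torch.relu(self.msg(h))        # aggregate neighbor messages
            h = torch.relu(self.upd(torch.cat([h, neigh], dim=-1)))
            return self.head(h.mean(dim=0))              # mean-pool readout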
Dementia-related cognitive impairment (CI) affects over 55 million people worldwide and is growing rapidly, with a new case every 3 seconds. With recurring failures of clinical trials, early diagnosis is crucial, yet 75% of dementia cases go undiagnosed globally, rising to 90% in low- and middle-income countries. Current diagnostic methods are notoriously complex, involving manual review of medical notes, extensive cognitive testing, expensive brain scans, or spinal fluid tests. Information relevant to CI is often found in electronic health records (EHRs) and can provide vital clues for early diagnosis, but manual review by experts is tedious and error prone. This project develops a novel state-of-the-art automated screening pipeline for scalable, high-speed discovery of CI in EHRs. To capture the linguistic context of the complex language structures found in EHRs, a database of 8,656 sequences was constructed to train an attention-based deep learning natural language processing model to classify sequences. A patient-level prediction model based on logistic regression was then built on top of the sequence-level classifier. The deep learning system achieved 93% accuracy with AUC = 0.98 in identifying patients who had no earlier diagnosis, dementia-related diagnosis code, or dementia-related medication in their EHR; these patients would otherwise have gone undetected or been detected too late. The EHR screening pipeline was deployed in NeuraHealthNLP, a web application for automated, real-time CI screening that requires only uploading an EHR in a browser. NeuraHealthNLP is cheaper, faster, and more accessible, and it outperforms current clinical methods, including text-based analytics and machine learning approaches. It makes early diagnosis viable in regions with scarce healthcare services but accessible internet or cellular services.
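The two-stage design described above can be illustrated with a short sketch: a sequence-level classifier produces per-sequence CI probabilities, and a logistic regression aggregates them into a patient-level prediction. The summary features chosen here are assumptions, not the paper's exact feature set.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def patient_features(seq_scores):
        """Summarize one patient's per-sequence CI probabilities (assumed statistics)."""
        s = np.asarray(seq_scores)
        return [s.mean(), s.max(), (s > 0.5).mean()]

    def fit_patient_model(seq_scores_per_patient, labels):
        # seq_scores_per_patient: list of per-sequence probability arrays, one per patient
        # labels: 1 if the patient has cognitive impairment, else 0
        X = np.array([patient_features(s) for s in seq_scores_per_patient])
        return LogisticRegression().fit(X, labels)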
Dementia is a neurodegenerative disease that results in cognitive decline and affects more than 50 million people worldwide. Dementia is underdiagnosed by healthcare professionals: only one in four people who suffer from dementia are diagnosed. Even when a diagnosis is made, it may not be entered as a structured International Classification of Diseases (ICD) diagnosis code in the patient's chart. Information relevant to cognitive impairment (CI) is often found in electronic health records (EHRs), but manual review of clinician notes by experts is both time consuming and often prone to errors. Automated mining of these notes presents an opportunity to label patients with cognitive impairment in EHR data. We developed natural language processing (NLP) tools to identify patients with cognitive impairment and demonstrate that linguistic context enhances performance on the cognitive impairment classification task. We fine-tuned an attention-based deep learning model that can learn from complex language structures, substantially improving accuracy (0.93) relative to a baseline NLP model (0.84). Furthermore, we show that deep learning NLP can successfully identify dementia patients without dementia-related ICD codes or medications.
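As a rough sketch of the fine-tuning setup this abstract describes, one could fine-tune an attention-based sequence classifier with the Hugging Face Trainer. The checkpoint, hyperparameters, and toy sequences below are placeholders rather than the authors' configuration.

    import torch
    from torch.utils.data import Dataset
    from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                              Trainer, TrainingArguments)

    tok = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=2)

    class SeqDataset(Dataset):
        """Wraps tokenized clinical-note sequences with binary CI labels."""
        def __init__(self, texts, labels):
            self.enc = tok(texts, truncation=True, padding=True, return_tensors="pt")
            self.labels = torch.tensor(labels)
        def __len__(self):
            return len(self.labels)
        def __getitem__(self, i):
            return {**{k: v[i] for k, v in self.enc.items()}, "labels": self.labels[i]}

    # Two toy sequences stand in for a real training database.
    train_ds = SeqDataset(
        ["pt reports progressive memory loss and word-finding difficulty",
         "routine visit, no cognitive concerns noted"],
        [1, 0])

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="ci-classifier", num_train_epochs=3),
        train_dataset=train_ds)
    trainer.train()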
We introduce camouflaged data poisoning attacks, a new attack vector that arises in the context of machine unlearning and other settings when model retraining may be induced. An adversary first adds a few carefully crafted points to the training dataset such that the impact on the model's predictions is minimal. The adversary subsequently triggers a request to remove a subset of the introduced points, at which point the attack is unleashed and the model's predictions are negatively affected. In particular, we consider clean-label targeted attacks (in which the goal is to cause the model to misclassify a specific test point) on datasets including CIFAR-10, Imagenette, and Imagewoof. This attack is realized by constructing camouflage datapoints that mask the effect of a poisoned dataset.
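A schematic of the attack flow might look like the following, where craft_poisons and craft_camouflages stand in for the paper's poison/camouflage optimization (e.g., a gradient-matching procedure), which is not reproduced here.

    def camouflaged_poisoning_attack(train_set, target_point, target_wrong_label,
                                     craft_poisons, craft_camouflages, retrain):
        # Step 1: craft clean-label poisons that would flip the target's prediction.
        poisons = craft_poisons(train_set, target_point, target_wrong_label)
        # Step 2: craft camouflages that cancel the poisons' effect, so a model
        # trained on train_set + poisons + camouflages behaves normally.
        camouflages = craft_camouflages(train_set, poisons, target_point)
        model = retrain(train_set + poisons + camouflages)   # looks benign
        # Step 3: the adversary requests unlearning of the camouflage points only;
        # retraining without them unleashes the poisoning effect on the target.
        model = retrain(train_set + poisons)
        return model   # now misclassifies target_point as target_wrong_label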
Anomaly analytics is a popular and vital task in various research contexts and has been studied for several decades. At the same time, deep learning has shown its capacity for solving many graph-based tasks such as node classification, link prediction, and graph classification. Recently, many studies have extended graph learning models to anomaly analytics problems, yielding beneficial advances in graph-based anomaly analytics techniques. In this survey, we provide a comprehensive overview of graph learning methods for anomaly analytics tasks. We classify them into four categories based on their model architectures, namely graph convolutional networks (GCN), graph attention networks (GAT), graph autoencoders (GAE), and other graph learning models, and compare the differences between these methods in a systematic manner. Furthermore, we outline several real-world graph-based anomaly analytics applications across various domains. Finally, we discuss five potential future research directions in this rapidly growing field.
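As one concrete instance of the GAE category, a graph autoencoder can be trained to reconstruct the adjacency matrix, with per-node reconstruction error used as an anomaly score. The dense-matrix sketch below assumes a normalized adjacency and is not any specific surveyed method.

    import torch
    import torch.nn as nn

    class GAE(nn.Module):
        def __init__(self, in_dim, hidden=64):
            super().__init__()
            self.w1 = nn.Linear(in_dim, hidden)
            self.w2 = nn.Linear(hidden, hidden)

        def encode(self, x, adj):
            h = torch.relu(adj @ self.w1(x))   # GCN-style propagation
            return adj @ self.w2(h)

        def forward(self, x, adj):
            z = self.encode(x, adj)
            return torch.sigmoid(z @ z.t())    # reconstructed adjacency

    def anomaly_scores(model, x, adj):
        a_hat = model(x, adj)
        return ((a_hat - adj) ** 2).mean(dim=1)  # per-node reconstruction error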
State-of-the-art pre-trained language models (PLMs) outperform other models when applied to the majority of language processing tasks. However, PLMs have been found to degrade in performance under distribution shift, a phenomenon that occurs when data at test time does not come from the same distribution as the source training set. Equally challenging is obtaining labels in real time, due to issues such as long labeling feedback loops. The lack of adequate methods for these challenges motivates approaches that continuously adapt the PLM to a distinct distribution. Unsupervised domain adaptation adapts a source model to an unseen and unlabeled target domain. While techniques such as data augmentation can adapt models in several scenarios, they have only sparsely been studied for addressing the distribution shift problem. In this work, we present an approach (MEMO-CL) that improves the performance of PLMs at test time under distribution shift. Our approach takes advantage of the latest unsupervised techniques in data augmentation and adaptation to minimize the entropy of the PLM's output distribution. MEMO-CL operates on a batch of augmented samples drawn from a single observation in the test set. The technique is unsupervised, domain-agnostic, easy to implement, and requires no additional data. Our experiments yield a 3% improvement over current test-time adaptation baselines.
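The core adaptation step, marginal entropy minimization over augmented views of a single test observation, can be sketched as follows; the augmentation function and batch size are placeholders.

    import torch
    import torch.nn.functional as F

    def memo_step(model, optimizer, x, augment, n_aug=8):
        optimizer.zero_grad()
        views = [augment(x) for _ in range(n_aug)]           # augmented copies of x
        probs = torch.stack([F.softmax(model(v), dim=-1) for v in views])
        marginal = probs.mean(dim=0)                          # marginal over augmentations
        entropy = -(marginal * marginal.clamp_min(1e-12).log()).sum(dim=-1).mean()
        entropy.backward()
        optimizer.step()                                      # adapt the model at test time
        return entropy.item()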
Building an AI agent that can design on its own has been a goal since the 1980s. Recently, deep learning has shown the ability to learn from large-scale data, enabling significant advances in data-driven design. However, learning over prior data limits us only to solve problems that have been solved before and biases data-driven learning towards existing solutions. The ultimate goal for a design agent is the ability to learn generalizable design behavior in a problem space without having seen it before. In this work, we introduce a self-learning agent framework that achieves this goal. This framework integrates a deep policy network with a novel tree search algorithm, where the tree search explores the problem space, and the deep policy network leverages self-generated experience to guide the search further. This framework first demonstrates an ability to discover high-performing generative strategies without any prior data, and second, it illustrates a zero-shot generalization of generative strategies across various unseen boundary conditions. This work evaluates the effectiveness and versatility of the framework by solving multiple versions of two engineering design problems without retraining. Overall, this paper presents a methodology to self-learn high-performing and generalizable problem-solving behavior in an arbitrary problem space, circumventing the need for expert data, existing solutions, and problem-specific learning.
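One way to picture the coupling of a deep policy network with tree search is a PUCT-style loop in which the policy prior guides which branches to expand. The environment interface, hashable states, and terminal evaluation below are illustrative assumptions, not the paper's exact algorithm.

    import math

    def policy_guided_search(root_state, env, policy, n_sims=100, c=1.4):
        stats = {}  # state -> {action: [visit_count, total_value]}; states assumed hashable

        def select(state):
            n = sum(v[0] for v in stats[state].values()) + 1
            # PUCT: exploit mean value, explore proportionally to the policy prior.
            return max(stats[state],
                       key=lambda a: (stats[state][a][1] / (stats[state][a][0] + 1e-9)
                                      + c * policy(state, a) * math.sqrt(n)
                                      / (1 + stats[state][a][0])))

        for _ in range(n_sims):
            state, path = root_state, []
            while state in stats and not env.is_terminal(state):
                a = select(state)
                path.append((state, a))
                state = env.step(state, a)
            if not env.is_terminal(state):
                stats[state] = {a: [0, 0.0] for a in env.actions(state)}
            value = env.evaluate(state)          # e.g., design performance score
            for s, a in path:                    # back up the value along the path
                stats[s][a][0] += 1
                stats[s][a][1] += value
        return select(root_state)                # best action at the root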
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
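Since the checkpoints are openly released on the Hugging Face hub, inference is straightforward; the example below uses the small bloom-560m variant so it runs on modest hardware.

    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
    model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

    inputs = tok("Translate to French: Hello, world!", return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=20)
    print(tok.decode(out[0], skip_special_tokens=True))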
Privacy, security, and bandwidth constraints have led to federated learning (FL) in wireless systems, where training a machine learning (ML) model is accomplished collaboratively without sharing raw data. Often, such collaborative FL strategies necessitate model aggregation at a server. On the other hand, decentralized FL necessitates that participating clients reach a consensus ML model by exchanging parameter updates. In this work, we propose the over-the-air clustered wireless FL (CWFL) strategy, which eliminates the need for a strong central server and yet achieves an accuracy similar to the server-based strategy while using fewer channel uses as compared to decentralized FL. We theoretically show that the convergence rate of CWFL per cluster is O(1/T) while mitigating the impact of noise. Using the MNIST and CIFAR datasets, we demonstrate the accuracy performance of CWFL for different numbers of clusters across communication rounds.
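A toy sketch of the clustered aggregation idea: each cluster's client updates are summed "over the air", modeled here simply as a noisy sum, and averaged into a cluster model. The Gaussian noise stands in for the wireless channel and is an illustrative assumption, not the paper's channel model.

    import numpy as np

    def cluster_round(client_updates, noise_std=0.01, rng=np.random.default_rng(0)):
        # client_updates: list of parameter vectors from one cluster's clients
        stacked = np.stack(client_updates)
        airsum = stacked.sum(axis=0) + rng.normal(0, noise_std, stacked.shape[1])
        return airsum / len(client_updates)   # noisy over-the-air cluster average

    def wireless_fl_round(clusters):          # clusters: list of lists of updates
        return [cluster_round(c) for c in clusters]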
Recent developments in explainable AI (XAI) methods allow researchers to explore the inner workings of deep neural networks (DNNs), revealing crucial information about input-output relationships and realizing how data connects with machine learning models. In this paper we explore interpretability of DNN models designed to identify jets coming from top quark decay in high energy proton-proton collisions at the Large Hadron Collider (LHC). We review a subset of existing top tagger models and explore different quantitative methods to identify which features play the most important roles in identifying the top jets. We also investigate how and why feature importance varies across different XAI metrics, how feature correlations impact their explainability, and how latent space representations encode information as well as correlate with physically meaningful quantities. Our studies uncover some major pitfalls of existing XAI methods and illustrate how they can be overcome to obtain consistent and meaningful interpretation of these models. We additionally illustrate the activity of hidden layers as Neural Activation Pattern (NAP) diagrams and demonstrate how they can be used to understand how DNNs relay information across the layers and how this understanding can help to make such models significantly simpler by allowing effective model reoptimization and hyperparameter tuning. By incorporating observations from the interpretability studies, we obtain state-of-the-art top tagging performance from an augmented implementation of the existing networks.
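One generic quantitative importance measure of the kind discussed above is permutation importance: shuffle one input feature at a time and record the drop in tagging accuracy. The model and jet-feature matrix below are placeholders, not a specific tagger from the paper.

    import numpy as np

    def permutation_importance(predict, X, y, rng=np.random.default_rng(0)):
        # predict: maps an (n_jets, n_features) matrix to class scores (n_jets, n_classes)
        base = (predict(X).argmax(axis=1) == y).mean()     # baseline tagging accuracy
        scores = []
        for j in range(X.shape[1]):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])                          # break feature j's information
            acc = (predict(Xp).argmax(axis=1) == y).mean()
            scores.append(base - acc)                      # importance = accuracy drop
        return np.array(scores)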